A COMPARISON OF METHODS FOR SELECTING PREFERRED SOLUTIONS IN MULTIOBJECTIVE DECISION MAKING
ISBN: 978-94-91216-77-0
In multiobjective optimization problems, the identified Pareto frontiers and sets often contain too many solutions, which makes it difficult for the decision maker to select a preferred alternative. To facilitate the selection task, decision-making support tools can be used at different stages of the multiobjective optimization search, either to introduce preferences on the objectives or to give a condensed representation of the solutions on the Pareto frontier, so as to offer the decision maker a manageable picture of the solution alternatives. This paper presents a comparison of some a priori and a posteriori decision-making support methods aimed at aiding the decision maker in the selection of preferred solutions. The considered methods are compared with respect to their application to a case study concerning the optimization of the test intervals of the components of a safety system of a nuclear power plant. The engine for the multiobjective optimization search is based on genetic algorithms.
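The a posteriori side of this workflow starts from the non-dominated set itself. As a minimal sketch (assuming minimization of every objective; function names are illustrative, not from the paper), a Pareto filter plus a simple a priori weighted-sum scalarization can be written as:

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points (minimization in every objective)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    keep = []
    for i in range(n):
        # i is dominated if some j is <= in all objectives and < in at least one
        dominated = any(
            np.all(pts[j] <= pts[i]) and np.any(pts[j] < pts[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

def weighted_sum_pick(points, weights):
    """A priori selection: scalarize the objectives with user weights, pick the minimum."""
    pts = np.asarray(points, dtype=float)
    return int(np.argmin(pts @ np.asarray(weights, dtype=float)))
```

The weighted-sum rule encodes preferences before the search; the Pareto filter condenses the result set afterwards, which is exactly the a priori / a posteriori split the paper compares.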
An integrative clustering approach combining particle swarm optimization and formal concept analysis
Deterministic Sampling and Range Counting in Geometric Data Streams
We present memory-efficient deterministic algorithms for constructing
epsilon-nets and epsilon-approximations of streams of geometric data. Unlike
probabilistic approaches, these deterministic samples provide guaranteed bounds
on their approximation factors. We show how our deterministic samples can be
used to answer approximate online iceberg geometric queries on data streams. We
use these techniques to approximate several robust statistics of geometric data
streams, including Tukey depth, simplicial depth, regression depth, the
Thiel-Sen estimator, and the least median of squares. Our algorithms use only a
polylogarithmic amount of memory, provided the desired approximation factors
are inverse-polylogarithmic. We also include a lower bound for non-iceberg
geometric queries.
Comment: 12 pages, 1 figure
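For intuition, a much simpler non-streaming analogue of a deterministic epsilon-approximation in one dimension keeps every k-th element of the sorted data, with k ≈ εn; any interval (range) count is then recovered from the sample with additive error at most k, deterministically. A sketch (names are illustrative, not from the paper):

```python
import numpy as np

def eps_approximation_1d(stream, eps):
    """Deterministic sample: every k-th element of the sorted data, k = ceil(eps*n)."""
    xs = np.sort(np.asarray(stream, dtype=float))
    n = len(xs)
    k = max(1, int(np.ceil(eps * n)))
    return xs[k - 1::k], k

def approx_range_count(sample, k, lo, hi):
    """Estimate |{x : lo <= x <= hi}| as k times the number of sample points in range."""
    return k * int(np.sum((sample >= lo) & (sample <= hi)))
```

The guarantee holds because the true points inside any interval occupy a contiguous run of sorted positions, and a run of length t contains either floor(t/k) or ceil(t/k) sampled positions.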
Gauge fields, ripples and wrinkles in graphene layers
We analyze elastic deformations of graphene sheets which lead to effective
gauge fields acting on the charge carriers. Corrugations in the substrate
induce stresses, which, in turn, can give rise to mechanical instabilities and
the formation of wrinkles. Similar effects may take place in suspended graphene
samples under tension.
Comment: contribution to the special issue of Solid State Communications on graphene
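In the standard elasticity picture (prefactors and sign conventions vary between references; this is the commonly quoted Suzuura–Ando form, not necessarily the one used in this paper), the strain tensor u_ij couples to the carriers as an effective vector potential:

```latex
A_x \sim \frac{\beta}{a}\left(u_{xx} - u_{yy}\right), \qquad
A_y \sim -\,\frac{2\beta}{a}\,u_{xy},
```

where a is the lattice constant and β = −∂ ln t/∂ ln a ≈ 2–3 measures the strain dependence of the hopping amplitude; the effective field has opposite signs at the two valleys, so time-reversal symmetry is preserved.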
Algorithms for Colourful Simplicial Depth and Medians in the Plane
The colourful simplicial depth of a point x in the plane relative to a
configuration of n points in k colour classes is exactly the number of closed
simplices (triangles) with vertices from 3 different colour classes that
contain x in their convex hull. We consider the problems of efficiently
computing the colourful simplicial depth of a point x, and of finding a point,
called a median, that maximizes colourful simplicial depth.
For computing the colourful simplicial depth of x, our algorithm runs in time
O(n log(n) + k n) in general, and O(kn) if the points are sorted around x. For
finding the colourful median, we get a time of O(n^4). For comparison, the
running times of the best known algorithm for the monochrome version of these
problems are O(n log(n)) in general, improving to O(n) if the points are sorted
around x for monochrome depth, and O(n^4) for finding a monochrome median.
Comment: 17 pages, 8 figures
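The definition can be checked directly by brute force. The following sketch (hypothetical names; it enumerates all colour triples rather than achieving the paper's O(n log n + kn) bound) counts the containing triangles explicitly:

```python
from itertools import combinations, product

def in_triangle(p, a, b, c):
    """Closed-triangle membership via signed areas (handles either orientation)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def colourful_simplicial_depth(x, colour_classes):
    """Count triangles with vertices from 3 distinct colour classes containing x."""
    depth = 0
    for ci, cj, ck in combinations(range(len(colour_classes)), 3):
        for a, b, c in product(colour_classes[ci], colour_classes[cj], colour_classes[ck]):
            if in_triangle(x, a, b, c):
                depth += 1
    return depth
```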
Robust high-dimensional precision matrix estimation
The dependency structure of multivariate data can be analyzed using the
covariance matrix. In many fields the precision matrix, the inverse of the
covariance matrix, is even more informative. As the sample covariance estimator
is singular in high dimensions, it cannot be used to obtain a precision matrix
estimator. A
popular high-dimensional estimator is the graphical lasso, but it lacks
robustness. We consider the high-dimensional independent contamination model.
Here, even a small percentage of contaminated cells in the data matrix may lead
to a high percentage of contaminated rows. Downweighting entire observations,
which is done by traditional robust procedures, would then result in a loss of
information. In this paper, we formally prove that replacing the sample
covariance matrix in the graphical lasso with an elementwise robust covariance
matrix leads to an elementwise robust, sparse precision matrix estimator
computable in high dimensions. Examples of such elementwise robust covariance
estimators are given. The final precision matrix estimator is positive definite,
has a high breakdown point under elementwise contamination, and can be computed
fast.
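One concrete elementwise robust covariance of the kind alluded to (a sketch under my own assumptions, not necessarily the paper's choice) combines pairwise Spearman correlations, transformed for consistency at the Gaussian model, with MAD scales:

```python
import numpy as np

def spearman_mad_covariance(X):
    """Elementwise robust covariance: pairwise Spearman correlations,
    transformed for consistency at the normal model, scaled by MADs."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    R = np.corrcoef(ranks, rowvar=False)        # Spearman correlation matrix
    R = 2.0 * np.sin(np.pi * R / 6.0)           # consistency transform (Gaussian model)
    mad = 1.4826 * np.median(np.abs(X - np.median(X, axis=0)), axis=0)
    return R * np.outer(mad, mad)               # note: need not be positive definite
```

Because every entry depends on its pair of columns only through ranks and medians, a single contaminated cell perturbs only its own row/column, and only slightly. The raw matrix need not be positive definite; in the scheme described above it would be passed to the graphical lasso, whose output is positive definite.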
On the Use of Minimum Volume Ellipsoids and Symplectic Capacities for Studying Classical Uncertainties for Joint Position-Momentum Measurements
We study the minimum volume ellipsoid estimator associated with a cloud of
points in phase space. Using as a natural measure of uncertainty the symplectic
capacity of the covariance ellipsoid we find that classical uncertainties obey
relations similar to those found in non-standard quantum mechanics.
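For context, in one common convention (de Gosson's; normalizations differ across references and may differ from this paper's), the covariance ellipsoid of Σ with Williamson symplectic eigenvalues ν₁ ≤ … ≤ νₙ has linear symplectic capacity

```latex
c\!\left(\left\{ z : \tfrac{1}{2}\, z^{\top} \Sigma^{-1} z \le 1 \right\}\right)
  = 2\pi\,\nu_{1},
```

and the quantum uncertainty principle takes the coordinate-free form c ≥ πħ, i.e. ν₁ ≥ ħ/2; the classical relations referred to here have the same structure with ħ replaced by a classical scale.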
Robust regression for periodicity detection in non-uniformly sampled time-course gene expression data
Background: In practice many biological time series measurements, including gene microarrays, are conducted at time points that the biologist finds interesting, not necessarily at fixed time intervals. In many circumstances we are interested in finding targets that are expressed periodically. To tackle the problems of uneven sampling and unknown noise type in periodicity detection, we propose to use robust regression.
Methods: The aim of this paper is to develop a general framework for robust periodicity detection, and to review and rank different approaches by means of simulations. We also show results for real measurement data.
Results: The simulation results clearly show that as the sampling of a time series becomes more and more uneven, methods that assume even sampling become unusable. We find that M-estimation provides a good compromise between robustness and computational efficiency.
Conclusion: Since uneven sampling occurs often in biological measurements, the robust methods developed in this paper are expected to have many uses. The regression-based formulation of the periodicity detection problem adapts easily to non-uniform sampling, and robust regression helps to reject inconsistently behaving data points.
Availability: The implementations are currently available for Matlab and will be made available for R users as well. More information can be found in the web supplement [1].
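As a minimal sketch of M-estimation for this task (illustrative names and choices, not the paper's implementation), a sinusoid at a candidate frequency can be fitted to non-uniformly sampled data by iteratively reweighted least squares with Huber weights:

```python
import numpy as np

def huber_periodic_fit(t, y, omega, n_iter=50, c=1.345):
    """Robustly fit y ~ a + b*cos(omega*t) + d*sin(omega*t) by IRLS with
    Huber weights; handles arbitrary (non-uniform) sampling times t."""
    A = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]                   # OLS start
    for _ in range(n_iter):
        r = y - A @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)                      # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
    return beta
```

Because the design matrix is evaluated at the actual sampling times, nothing in the fit assumes uniform spacing; scanning omega over a grid and scoring the fitted amplitude gives a robust periodogram.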
Yet another breakdown point notion: EFSBP - illustrated at scale-shape models
The breakdown point in its different variants is one of the central notions
to quantify the global robustness of a procedure. We propose a simple
supplementary variant which is useful in situations where we have no obvious or
only partial equivariance: Extending the Donoho and Huber (1983) Finite Sample
Breakdown Point, we propose the Expected Finite Sample Breakdown Point to
produce less configuration-dependent values while still preserving the finite
sample aspect of the former definition. We apply this notion for joint
estimation of scale and shape (with only scale-equivariance available),
exemplified for generalized Pareto, generalized extreme value, Weibull, and
Gamma distributions. In these settings, we are interested in highly-robust,
easy-to-compute initial estimators; to this end we study Pickands-type and
Location-Dispersion-type estimators and compute their respective breakdown
points.
Comment: 21 pages, 4 figures
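The replacement version of the finite sample breakdown point is easy to probe empirically; for the median it equals ⌊(n+1)/2⌋/n, while for the mean a single replaced observation suffices, as this sketch (hypothetical helper name) illustrates:

```python
import numpy as np

def breaks_down(estimator, x, m, bad=1e12):
    """Replace m of the n sample points by an arbitrarily bad value and check
    whether the estimate is carried beyond any reasonable bound (here: 1e6)."""
    xc = np.array(x, dtype=float)
    xc[:m] = bad
    return abs(estimator(xc)) > 1e6
```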
ANALYTICAL QUALITY ASSESSMENT OF ITERATIVELY REWEIGHTED LEAST-SQUARES (IRLS) METHOD
The iteratively reweighted least-squares (IRLS) technique has been widely employed in the geodetic and geophysical literature. Reliability measures are important diagnostic tools for assessing the strength of model validation. An exact analytical method is adopted to obtain insight into how much iterative reweighting can affect the quality indicators. Theoretical analyses and numerical results show that, when the downweighting procedure is performed, (1) the precision, all kinds of dilution of precision (DOP) metrics, and the minimal detectable bias (MDB) become larger; (2) the bias-to-noise ratio (BNR) also varies; and (3) all these results coincide with those obtained by the first-order approximation method.
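Claim (1) follows from the matrix ordering AᵀWA ⪯ AᵀA whenever all weights satisfy 0 < wᵢ ≤ 1: the cofactor matrix of the weighted estimate dominates the unweighted one, so variances, DOP values, and MDBs all grow. A small numerical sketch (hypothetical function name):

```python
import numpy as np

def covariance_of_estimate(A, w=None):
    """Cofactor matrix (A^T W A)^{-1} of the (weighted) least-squares estimate;
    w = None means W = I, i.e. no downweighting."""
    N = A.T @ A if w is None else A.T @ (w[:, None] * A)
    return np.linalg.inv(N)
```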